High-Resolution Water Sampling via a Solar-Powered Autonomous Surface Vehicle
Mamani, Misael, Fernandez, Mariel, Luna, Grace, Limachi, Steffani, Apaza, Leonel, Montes-Dávalos, Carolina, Herrera, Marcelo, Salcedo, Edwin
Accurate water quality assessment requires spatially resolved sampling, yet most unmanned surface vehicles (USVs) can collect only a limited number of samples or rely on single-point sensors with poor representativeness. This work presents a solar-powered, fully autonomous USV featuring a novel syringe-based sampling architecture capable of acquiring 72 discrete, contamination-minimized water samples per mission. The vehicle incorporates a ROS 2 autonomy stack with GPS-RTK navigation, LiDAR and stereo-vision obstacle detection, Nav2-based mission planning, and long-range LoRa supervision, enabling dependable execution of sampling routes in unstructured environments. The platform integrates a behavior-tree autonomy architecture adapted from Nav2, enabling mission-level reasoning and perception-aware navigation. A modular 6x12 sampling system, controlled by distributed micro-ROS nodes, provides deterministic actuation, fault isolation, and rapid module replacement, achieving spatial coverage beyond previously reported USV-based samplers. Field trials in Achocalla Lagoon (La Paz, Bolivia) demonstrated 87% waypoint accuracy, stable autonomous navigation, and accurate physicochemical measurements (temperature, pH, conductivity, total dissolved solids) comparable to manually collected references. These results demonstrate that the platform enables reliable high-resolution sampling and autonomous mission execution, providing a scalable solution for aquatic monitoring in remote environments.
- North America > Canada (0.28)
- South America > Bolivia > La Paz Department > Pedro Domingo Murillo Province > La Paz (0.24)
- Asia > Malaysia (0.04)
- (5 more...)
- Water & Waste Management > Water Management > Water Supplies & Services (1.00)
- Government (1.00)
- Energy > Renewable > Solar (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.93)
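The 87% waypoint accuracy reported above can be understood as the fraction of planned waypoints for which the vehicle logged a GPS fix within some tolerance radius. A minimal sketch of such a metric, assuming a simple haversine distance and a hypothetical 2 m tolerance (the paper's exact criterion is not stated here):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def waypoint_accuracy(planned, reached, tol_m=2.0):
    """Fraction of planned waypoints with at least one logged fix within tol_m."""
    hits = sum(
        1 for wp in planned
        if any(haversine_m(*wp, *fix) <= tol_m for fix in reached)
    )
    return hits / len(planned)
```

With one waypoint reached exactly and a second one roughly 110 m off, the metric returns 0.5, matching the intuitive "hit rate" reading of the reported figure.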
Capability-Driven Skill Generation with LLMs: A RAG-Based Approach for Reusing Existing Libraries and Interfaces
da Silva, Luis Miguel Vieira, Köcher, Aljosha, König, Nicolas, Gehlhoff, Felix, Fay, Alexander
Modern automation systems increasingly rely on modular architectures, with capabilities and skills as one solution approach. Capabilities define the functions of resources in a machine-readable form and skills provide the concrete implementations that realize those capabilities. However, the development of a skill implementation conforming to a corresponding capability remains a time-consuming and challenging task. In this paper, we present a method that treats capabilities as contracts for skill implementations and leverages large language models to generate executable code based on natural language user input. A key feature of our approach is the integration of existing software libraries and interface technologies, enabling the generation of skill implementations across different target languages. We introduce a framework that allows users to incorporate their own libraries and resource interfaces into the code generation process through a retrieval-augmented generation architecture. The proposed method is evaluated using an autonomous mobile robot controlled via Python and ROS 2, demonstrating the feasibility and flexibility of the approach.
- Research Report (1.00)
- Workflow (0.95)
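The retrieval-augmented step described above, matching a natural-language request against the user's own library and interface documentation before code generation, can be sketched with a bag-of-words cosine similarity. This is an illustrative stand-in, not the paper's implementation; the snippet texts and function names are hypothetical:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, snippets: list[str], k: int = 1) -> list[str]:
    """Return the k library-doc snippets most similar to the user request."""
    q = Counter(query.lower().split())
    ranked = sorted(
        snippets,
        key=lambda s: cosine(q, Counter(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]
```

A real system would use learned embeddings rather than word counts, but the pipeline shape is the same: retrieve the most relevant interface documentation, then hand it to the LLM as context for generating the skill implementation.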
AUTOSAR AP and ROS 2 Collaboration Framework
Iwakami, Ryudai, Peng, Bo, Hanyu, Hiroyuki, Ishigooka, Tasuku, Azumi, Takuya
Abstract--The field of autonomous vehicle research is advancing rapidly, necessitating platforms that meet real-time performance, safety, and security requirements for practical deployment. AUTOSAR Adaptive Platform (AUTOSAR AP) is widely adopted in development to meet these criteria; however, licensing constraints and tool implementation challenges limit its use in research. Conversely, Robot Operating System 2 (ROS 2) is predominantly used in research within the autonomous driving domain, leading to a disparity between research and development platforms that hinders swift commercialization. This paper proposes a collaboration framework that enables AUTOSAR AP and ROS 2 to communicate with each other using the Data Distribution Service for Real-Time Systems (DDS). The proposed framework bridges these protocol differences, ensuring seamless interaction between the two platforms. Furthermore, the availability of the proposed collaboration framework is improved by automatically generating a configuration file for the proposed bridge converter. Autonomous driving technology [1] has rapidly advanced, drawing worldwide attention and spurring significant research and development efforts. This surge in activity has brought autonomous vehicles closer to widespread practical use for transporting both people and goods. As these vehicles approach commercial readiness, consumer expectations and the landscape of automotive development are evolving. Research in this domain is dynamic, propelled by technological advancements and market demand. Autonomous driving systems must process data from numerous sensors and cameras in real time, necessitating high technical competence.
- Automobiles & Trucks (0.87)
- Information Technology (0.55)
- Transportation > Ground > Road (0.55)
- Education > Educational Setting > Higher Education (0.40)
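The automatically generated configuration file mentioned above pairs AUTOSAR AP service events with ROS 2 topics that share an underlying DDS type. A minimal sketch of such a generator, assuming a hypothetical JSON schema and field names (the paper's actual format is not shown here):

```python
import json

def make_bridge_config(mappings):
    """Emit a bridge-converter config pairing AUTOSAR AP service events
    with ROS 2 topics over a shared DDS type. Field names are illustrative."""
    return json.dumps(
        {
            "bridges": [
                {
                    "ap_service": svc,
                    "ap_event": evt,
                    "ros2_topic": topic,
                    "dds_type": dds_type,
                }
                for svc, evt, topic, dds_type in mappings
            ]
        },
        indent=2,
    )
```

Generating this mapping mechanically, rather than writing it by hand for every service, is what makes the bridge practical when the set of AP services or ROS 2 topics changes.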
Simulating an Autonomous System in CARLA using ROS 2
Abdo, Joseph, Shibu, Aditya, Saeed, Moaiz, Aga, Abdul Maajid, Sivaprazad, Apsara, Al-Musleh, Mohamed
Abstract--Autonomous racing offers a rigorous setting to stress test perception, planning, and control under high speed and uncertainty. This paper proposes an approach to design and evaluate a software stack for an autonomous race car in the CARLA (Car Learning to Act) simulator, targeting competitive driving performance in the Formula Student UK Driverless (FS-AI) 2025 competition. Optimized trajectories are computed considering vehicle dynamics and simulated environmental factors such as visibility and lighting to navigate the track efficiently. The complete autonomous stack is implemented in ROS 2 and validated extensively in CARLA on a dedicated vehicle (ADS-DV) before being ported to the actual hardware, which includes the Jetson AGX Orin 64GB, ZED2i Stereo Camera, Robosense Helios 16P LiDAR, and CHCNAV Inertial Navigation System (INS). The Formula Student Driverless (FS-AI) competition has stimulated research on autonomous racing software stacks validated through both real-world testing and simulation.
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.05)
- Europe > Portugal > Porto > Porto (0.04)
Testing and Evaluation of Underwater Vehicle Using Hardware-In-The-Loop Simulation with HoloOcean
Meyers, Braden, Mangelson, Joshua G.
Testing marine robotics systems in controlled environments before field tests is challenging, especially when acoustic-based sensors and control surfaces only function properly underwater. Deploying robots in indoor tanks and pools often faces space constraints that complicate testing of control, navigation, and perception algorithms at scale. Recent developments of high-fidelity underwater simulation tools have the potential to address these problems. We demonstrate the utility of the recently released HoloOcean 2.0 simulator with improved dynamics for torpedo AUV vehicles and a new ROS 2 interface. We have successfully demonstrated a Hardware-in-the-Loop (HIL) and Software-in-the-Loop (SIL) setup for testing and evaluating a CougUV torpedo autonomous underwater vehicle (AUV) that was built and developed in our lab. With this HIL and SIL setup, simulations are run in HoloOcean using a ROS 2 bridge such that simulated sensor data is sent to the CougUV (mimicking sensor drivers) and control surface commands are sent back to the simulation, where vehicle dynamics and sensor data are calculated. We compare our simulated results to real-world field trial results.
- North America > United States > California > Monterey County > Monterey (0.04)
- Europe > France (0.04)
- Asia > Japan > Honshū > Tōhoku > Miyagi Prefecture > Sendai (0.04)
- Asia > Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
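The HIL/SIL loop described above alternates between the simulator producing sensor data and the vehicle-side stack returning control-surface commands over the ROS 2 bridge. A toy sketch of one such tick, with invented field names, simplistic dynamics, and a proportional depth-hold controller standing in for the real CougUV stack:

```python
def hil_step(sim_state, controller):
    """One HIL/SIL tick: the simulator emits sensor data, the vehicle-side
    controller returns actuator commands, and the simulator integrates
    dynamics. All names and the dynamics model are illustrative."""
    sensor_msg = {"depth": sim_state["depth"], "heading": sim_state["heading"]}
    cmd = controller(sensor_msg)  # stands in for the ROS 2 bridge round trip
    sim_state = dict(sim_state)
    sim_state["depth"] += cmd["fin_pitch"] * 0.1  # toy depth response
    return sim_state, cmd

def depth_hold(target_m):
    """Proportional fin command steering measured depth toward target_m."""
    def controller(sensor):
        return {"fin_pitch": 0.5 * (target_m - sensor["depth"])}
    return controller
```

Running the loop drives the simulated depth toward the setpoint, which is the kind of closed-loop behavior the HIL setup lets the authors validate before a field trial.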
A CODECO Case Study and Initial Validation for Edge Orchestration of Autonomous Mobile Robots
Zhu, H., Samizadeh, T., Sofia, R. C.
Hongyu Zhu, Tina Samizadeh, and Rute C. Sofia are with fortiss, the research institute of the Free State of Bavaria associated with the Technical University of Munich (TUM). Abstract--Autonomous Mobile Robots (AMRs) increasingly adopt containerized micro-services across the Edge-Cloud continuum. While Kubernetes is the de-facto orchestrator for such systems, its assumptions--stable networks, homogeneous resources, and ample compute capacity--do not fully hold in mobile, resource-constrained robotic environments. The paper describes a case study on a smart-manufacturing AMR and performs an initial comparison between CODECO orchestration and standard Kubernetes using a controlled Kubernetes-in-Docker (KinD) environment. Metrics include pod deployment and deletion times, CPU and memory usage, and inter-pod data rates. The observed results indicate that CODECO offers reduced CPU consumption and more stable communication patterns, at the cost of modest memory overhead (10-15%) and slightly increased pod lifecycle latency due to secure overlay initialization. Kubernetes provides declarative configuration, automated scaling, and robust availability mechanisms that make it highly effective in cloud data centers. However, its design assumptions, namely the existence of relatively stable networks, abundant compute resources, and largely static infrastructure, do not fully hold in Edge-Edge and Edge-Cloud environments. In such settings, resources can be constrained and heterogeneous.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.24)
- North America > United States (0.04)
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence > Robots > Locomotion (0.61)
Work-in-Progress: Function-as-Subtask API Replacing Publish/Subscribe for OS-Native DAG Scheduling
Ishikawa-Aso, Takahiro, Yano, Atsushi, Kobayashi, Yutaro, Jin, Takumi, Takano, Yuuki, Kato, Shinpei
The Directed Acyclic Graph (DAG) task model for real-time scheduling finds its primary practical target in Robot Operating System 2 (ROS 2). However, ROS 2's publish/subscribe API leaves DAG precedence constraints unenforced: a callback may publish mid-execution, and multi-input callbacks let developers choose topic-matching policies. Thus preserving DAG semantics relies on conventions; once violated, the model collapses. We propose the Function-as-Subtask (FasS) API, which expresses each subtask as a function whose arguments/return values are the subtask's incoming/outgoing edges. By minimizing description freedom, DAG semantics is guaranteed at the API rather than by programmer discipline. We implement a DAG-native scheduler using FasS on a Rust-based experimental kernel and evaluate its semantic fidelity, and we outline design guidelines for applying FasS to Linux's sched_ext.
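The core idea, precedence enforced by data availability rather than programmer discipline, can be sketched outside ROS 2 entirely. In this toy Python rendering (the paper's actual API is a Rust kernel interface), each subtask is a function whose parameter names are its incoming edges and whose return value feeds its outgoing edge, so a subtask simply cannot run before its inputs exist:

```python
import inspect

def run_dag(tasks):
    """Execute a DAG of subtasks keyed by name. Each subtask's parameter
    names are its incoming edges; its return value is its outgoing edge.
    Precedence follows from data availability, not calling conventions."""
    results = {}
    pending = dict(tasks)
    while pending:
        ready = [
            name for name, fn in pending.items()
            if all(p in results for p in inspect.signature(fn).parameters)
        ]
        if not ready:
            raise ValueError("cycle or missing edge in task graph")
        for name in ready:
            fn = pending.pop(name)
            deps = inspect.signature(fn).parameters
            results[name] = fn(**{p: results[p] for p in deps})
    return results
```

Contrast this with publish/subscribe, where nothing stops a callback from publishing mid-execution: here an edge value only exists once its producer has returned, which is exactly the guarantee FasS moves into the API.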
Lite VLA: Efficient Vision-Language-Action Control on CPU-Bound Edge Robots
Williams, Justin, Gupta, Kishor Datta, George, Roy, Sarkar, Mrinmoy
The deployment of artificial intelligence models at the edge is increasingly critical for autonomous robots operating in GPS-denied environments where local, resource-efficient reasoning is essential. This work demonstrates the feasibility of deploying small Vision-Language Models (VLMs) on mobile robots to achieve real-time scene understanding and reasoning under strict computational constraints. Unlike prior approaches that separate perception from mobility, the proposed framework enables simultaneous movement and reasoning in dynamic environments using only on-board hardware. The system integrates a compact VLM with multimodal perception to perform contextual interpretation directly on embedded hardware, eliminating reliance on cloud connectivity. Experimental validation highlights the balance between computational efficiency, task accuracy, and system responsiveness. Implementation on a mobile robot confirms one of the first successful deployments of small VLMs for concurrent reasoning and mobility at the edge. This work establishes a foundation for scalable, assured autonomy in applications such as service robotics, disaster response, and defense operations.
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
ROSBag MCP Server: Analyzing Robot Data with LLMs for Agentic Embodied AI Applications
Fu, Lei, Salimpour, Sahar, Militano, Leonardo, Edelman, Harry, Queralta, Jorge Peña, Toffetti, Giovanni
Agentic AI systems and Physical or Embodied AI systems have been two key research verticals at the forefront of Artificial Intelligence and Robotics, with Model Context Protocol (MCP) increasingly becoming a key component and enabler of agentic applications. However, the literature at the intersection of these verticals, i.e., Agentic Embodied AI, remains scarce. This paper introduces an MCP server for analyzing ROS and ROS 2 bags, allowing for analyzing, visualizing and processing robot data with natural language through LLMs and VLMs. We describe specific tooling built with robotics domain knowledge, with our initial release focused on mobile robotics and supporting natively the analysis of trajectories, laser scan data, transforms, or time series data. This is in addition to providing an interface to standard ROS 2 CLI tools ("ros2 bag list" or "ros2 bag info"), as well as the ability to filter bags with a subset of topics or trimmed in time. Coupled with the MCP server, we provide a lightweight UI that allows the benchmarking of the tooling with different LLMs, both proprietary (Anthropic, OpenAI) and open-source (through Groq). Our experimental results include the analysis of tool calling capabilities of eight different state-of-the-art LLM/VLM models, both proprietary and open-source, large and small. Our experiments indicate that there is a large divide in tool calling capabilities, with Kimi K2 and Claude Sonnet 4 demonstrating clearly superior performance. We also conclude that there are multiple factors affecting the success rates, from the tool description schema to the number of arguments, as well as the number of tools available to the models. The code is available with a permissive license at https://github.com/binabik-ai/mcp-rosbags.
- North America > United States (0.14)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Finland > Southwest Finland > Turku (0.04)
TUM Teleoperation: Open Source Software for Remote Driving and Assistance of Automated Vehicles
Kerbl, Tobias, Brecht, David, Gehrke, Nils, Karunainayagam, Nijinshan, Krauss, Niklas, Pfab, Florian, Taupitz, Richard, Trautmannsheimer, Ines, Su, Xiyan, Wolf, Maria-Magdalena, Diermeyer, Frank
Abstract--Teleoperation is a key enabler for future mobility, supporting Automated Vehicles in rare and complex scenarios beyond the capabilities of their automation. Despite ongoing research, no open source software currently combines Remote Driving, e.g., via steering wheel and pedals, Remote Assistance through high-level interaction with automated driving software modules, and integration with a real-world vehicle for practical testing. To address this gap, we present a modular, open source teleoperation software stack that can interact with an automated driving software, e.g., Autoware, enabling Remote Assistance and Remote Driving. The software features standardized interfaces for seamless integration with various real-world and simulation platforms, while allowing for flexible design of the human-machine interface. The system is designed for modularity and ease of extension, serving as a foundation for collaborative development on individual software components as well as realistic testing and user studies. To demonstrate the applicability of our software, we evaluated the latency and performance of different vehicle platforms in simulation and the real world. Teleoperation enables remote support of robots over mobile networks, allowing humans to handle tasks that cannot be fully automated. In the field of intelligent vehicles, teleoperation has gained traction, with companies like Fernride and Vay deploying remote driving solutions for logistics and car sharing, gathering significant funding [1, 2]. Teleoperation also supports Automated Vehicles (AVs) during disengagements, as seen with Waymo and Zoox, which rely on Remote Operators (ROs) when AVs cannot resolve a scenario [3, 4].
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > Hawaii (0.04)
- Europe > Switzerland (0.04)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Information Technology > Robotics & Automation (0.69)
- Information Technology > Software (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)